Deploying machine learning models in production may allow adversaries to infer sensitive information about training data. There is a vast literature analyzing different types of inference risks, ranging from membership inference to reconstruction attacks. Inspired by the success of games (i.e., probabilistic experiments) in studying security properties in cryptography, some authors describe privacy inference risks in machine learning using a similar game-based style. However, adversary capabilities and goals are often stated in subtly different ways from one presentation to another, which makes it hard to relate and compose results. In this paper, we present a game-based framework to systematize the body of knowledge on privacy inference risks in machine learning.
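As a concrete illustration of the game-based style this abstract refers to, the following is a minimal Python sketch of a membership-inference game; the `train` and `adversary` callables are placeholders supplied by the caller, and the whole simulation is an illustrative assumption, not the paper's formalization.

```python
import random

def membership_inference_game(data_pool, train, adversary, n):
    """One run of a membership-inference game (illustrative sketch).

    The challenger samples a training set, flips a coin b, and hands the
    adversary either a member (b=1) or a fresh non-member (b=0) record
    together with the trained model. The adversary wins if it guesses b.
    """
    records = random.sample(data_pool, n + 1)
    train_set, held_out = records[:n], records[n]
    model = train(train_set)                      # challenger trains the model
    b = random.randint(0, 1)                      # secret bit
    challenge = random.choice(train_set) if b == 1 else held_out
    guess = adversary(model, challenge)           # adversary sees model + record
    return guess == b

def advantage(data_pool, train, adversary, n, trials=1000):
    """Empirical advantage over random guessing: 2 * Pr[win] - 1."""
    wins = sum(membership_inference_game(data_pool, train, adversary, n)
               for _ in range(trials))
    return 2 * wins / trials - 1
```

Other inference risks (attribute inference, reconstruction, distribution inference) fit the same template by changing what the challenger hides and what the adversary must output.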
A distribution inference attack aims to infer statistical properties of data used to train machine learning models. These attacks are sometimes surprisingly potent, but the factors that impact distribution inference risk are not well understood and demonstrated attacks often rely on strong and unrealistic assumptions such as full knowledge of training environments even in supposedly black-box threat scenarios. To improve understanding of distribution inference risks, we develop a new black-box attack that even outperforms the best known white-box attack in most settings. Using this new attack, we evaluate distribution inference risk while relaxing a variety of assumptions about the adversary's knowledge under black-box access, like known model architectures and label-only access. Finally, we evaluate the effectiveness of previously proposed defenses and introduce new defenses. We find that although noise-based defenses appear to be ineffective, a simple re-sampling defense can be highly effective. Code is available at https://github.com/iamgroot42/dissecting_distribution_inference
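The abstract's "simple re-sampling defense" could plausibly look like the following sketch: the trainer re-samples its data so that the attribute ratio an adversary would try to infer is pinned to a fixed value, regardless of the true proportion. Function and parameter names are assumptions for illustration, not the released code at the linked repository.

```python
import random

def resample_to_ratio(records, has_property, target_ratio, seed=0):
    """Re-sample a training set so the fraction of records satisfying
    `has_property` equals `target_ratio`, masking the true proportion.

    Illustrative sketch: down-samples the over-represented group only.
    """
    rng = random.Random(seed)
    pos = [r for r in records if has_property(r)]
    neg = [r for r in records if not has_property(r)]
    # Largest dataset achievable at the target ratio without up-sampling.
    n_total = min(int(len(pos) / target_ratio) if target_ratio > 0 else len(neg),
                  int(len(neg) / (1 - target_ratio)) if target_ratio < 1 else len(pos))
    n_pos = round(n_total * target_ratio)
    n_neg = n_total - n_pos
    return rng.sample(pos, n_pos) + rng.sample(neg, n_neg)
```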
Privacy in federated learning (FL) has been studied at two different granularities: item-level, which protects individual data points, and user-level, which protects each user (participant) in the federation. Nearly all of the private FL literature is dedicated to studying privacy attacks and defenses at these two granularities. Recently, subject-level privacy has emerged as an alternative privacy granularity to protect the privacy of individuals (data subjects) whose data is spread across multiple (organizational) users in cross-silo FL settings. An adversary might be interested in recovering private information about these individuals (a.k.a. data subjects) by attacking the trained model. A systematic study of these attack patterns requires complete control over the federation, which is not possible with real-world datasets. We design a simulator for generating various synthetic federation configurations, enabling us to study how properties of the data, model design and training, and the federation itself impact subject privacy risk. We propose three attacks for subject membership inference and examine the interplay between all factors within a federation that affect the attacks' efficacy. We also investigate the effectiveness of differential privacy in mitigating this threat. Our takeaways generalize to real-world datasets like FEMNIST, lending credence to our findings.
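To ground the threat model (this is not one of the paper's three attacks), here is a minimal loss-thresholding sketch of subject membership inference: the adversary scores every record it attributes to a data subject against the released model and aggregates the scores into a single in/out decision. The names and the aggregation rule are assumptions.

```python
import numpy as np

def subject_membership_score(model_loss, subject_records, aggregate=np.mean):
    """Score how likely a subject's data was part of the federation's training.

    model_loss: callable mapping a record to the trained model's loss on it.
    subject_records: all records the adversary attributes to one data subject,
                     possibly spread across several organizational silos.
    Lower aggregate loss -> stronger evidence the subject's data was used.
    """
    losses = np.array([model_loss(r) for r in subject_records])
    return -aggregate(losses)   # higher score = stronger membership signal

def predict_membership(score, threshold):
    """Threshold the aggregated score into an in/out guess (illustrative)."""
    return score >= threshold
```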
Distribution inference, sometimes called property inference, infers statistical properties about a training set from access to a model trained on that data. Distribution inference attacks can pose serious risks when models are trained on private data, but are difficult to distinguish from the intrinsic purpose of statistical machine learning, namely, producing models that capture statistical properties of a distribution. Motivated by the inference framework of Yeom et al., we propose a formal and general definition of distribution inference attacks that is general enough to describe a broad class of attacks distinguishing between possible training distributions. We show how our definition captures previous ratio-based property inference attacks as well as new kinds of attacks, including attacks that reveal the average node degree or clustering coefficient of a training graph. To understand distribution inference risks, we introduce a metric that quantifies observed leakage by relating it to the leakage that would occur if samples from the training distribution were provided directly to the adversary. We report on a series of experiments across a range of different distributions, using both novel black-box attacks and improved versions of state-of-the-art white-box attacks. Our results show that inexpensive attacks are often as effective as expensive meta-classifier attacks, and that there are surprising asymmetries in the effectiveness of attacks.
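A hedged sketch of the kind of distinguishing experiment described above, specialized to a ratio-based property: the challenger trains on data whose attribute ratio is one of two candidate values, and the adversary must guess which from the model alone. The helper names and the `train`/`adversary` callables are placeholders, not the paper's notation.

```python
import random

def sample_with_ratio(pool, has_property, ratio, n, rng):
    """Draw n records such that a `ratio` fraction satisfy has_property."""
    pos = [r for r in pool if has_property(r)]
    neg = [r for r in pool if not has_property(r)]
    k = round(n * ratio)
    return rng.sample(pos, k) + rng.sample(neg, n - k)

def ratio_inference_game(pool, has_property, ratio_0, ratio_1,
                         train, adversary, n, seed=None):
    """One run of a ratio-based distribution-inference game (sketch).

    The challenger trains on data whose attribute ratio is either ratio_0 or
    ratio_1; the adversary sees only the trained model and guesses which.
    """
    rng = random.Random(seed)
    b = rng.randint(0, 1)
    ratio = ratio_1 if b == 1 else ratio_0
    model = train(sample_with_ratio(pool, has_property, ratio, n, rng))
    return adversary(model) == b
```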
Vision Transformers (ViTs) have gained significant popularity in recent years and have proliferated into many applications. However, it is not well explored how varied their behavior is under different learning paradigms. We compare ViTs trained through different methods of supervision, and show that they learn a diverse range of behaviors in terms of their attention, representations, and downstream performance. We also discover ViT behaviors that are consistent across supervision, including the emergence of Offset Local Attention Heads. These are self-attention heads that attend to a token adjacent to the current token with a fixed directional offset, a phenomenon that to the best of our knowledge has not been highlighted in any prior work. Our analysis shows that ViTs are highly flexible and learn to process local and global information in different orders depending on their training method. We find that contrastive self-supervised methods learn features that are competitive with explicitly supervised features, and they can even be superior for part-level tasks. We also find that the representations of reconstruction-based models show non-trivial similarity to contrastive self-supervised models. Finally, we show how the "best" layer for a given task varies by both supervision method and task, further demonstrating the differing order of information processing in ViTs.
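A minimal sketch (PyTorch-style tensors) of how one might detect Offset Local Attention Heads of the kind described above: for each head, measure how much attention mass falls on the token at a fixed directional offset from the query token. The tensor layout and the 0.5 threshold are assumptions for illustration, not the authors' analysis code.

```python
import torch

def offset_attention_score(attn, offset):
    """Average attention each query token pays to the token `offset` positions away.

    attn: attention weights of shape (heads, seq_len, seq_len), rows sum to 1.
    Returns a per-head score in [0, 1].
    """
    heads, seq_len, _ = attn.shape
    idx = torch.arange(seq_len)
    valid = (idx + offset >= 0) & (idx + offset < seq_len)
    q = idx[valid]                      # query positions with a valid neighbor
    k = q + offset                      # key positions at the fixed offset
    return attn[:, q, k].mean(dim=-1)   # one score per head

def find_offset_local_heads(attn, offsets=(-1, 1), threshold=0.5):
    """Return (head index, offset) pairs whose attention concentrates at one offset."""
    hits = []
    for off in offsets:
        scores = offset_attention_score(attn, off)
        hits += [(h, off) for h in torch.nonzero(scores > threshold).flatten().tolist()]
    return hits
```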
Real-world tasks are largely composed of multiple models, each performing a sub-task in a larger chain of tasks, i.e., using the output from one model as input for another model in a multi-model pipeline. A model like MATra performs the task of crosslingual transliteration in two stages, using English as an intermediate transliteration target when transliterating between two Indic languages. We propose a novel distillation technique, EPIK, that condenses two-stage pipelines for hierarchical tasks into a single end-to-end model without compromising performance. This method can create end-to-end models for tasks without needing a dedicated end-to-end dataset, solving the data scarcity problem. The EPIK model has been distilled from the MATra model using this technique of knowledge distillation. The MATra model can perform crosslingual transliteration between five languages: English, Hindi, Tamil, Kannada, and Bengali. The EPIK model executes the task of transliteration without any intermediate English output while retaining the performance and accuracy of the MATra model. The EPIK model can perform transliteration with an average CER score of 0.015 and an average phonetic accuracy of 92.1%. In addition, the average execution time is reduced by 54.3% compared to the teacher model, and the EPIK model has a similarity score of 97.5% with the teacher encoder. In a few cases, the EPIK model (student model) can outperform the MATra model (teacher model) even though it has been distilled from the MATra model.
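For reference, a CER figure like the one quoted above is typically computed as a Levenshtein edit distance normalized by the reference length; the following is a small sketch of that metric, not the authors' evaluation code.

```python
def char_error_rate(reference, hypothesis):
    """Character error rate: Levenshtein distance / len(reference)."""
    m, n = len(reference), len(hypothesis)
    prev = list(range(n + 1))
    for i in range(1, m + 1):
        curr = [i] + [0] * n
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            curr[j] = min(prev[j] + 1,        # deletion
                          curr[j - 1] + 1,    # insertion
                          prev[j - 1] + cost) # substitution
        prev = curr
    return prev[n] / max(m, 1)

# Example: a transliteration off by one character -> CER of 1/7.
print(char_error_rate("namaste", "namasthe"))
```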
Deep learning (DL) models are increasingly powering a wide variety of applications. Unfortunately, this pervasiveness also makes them attractive targets for extraction attacks, which can steal a target DL model's architecture, parameters, and hyperparameters. Existing extraction attack studies have observed varying levels of attack success across different DL models and datasets, yet the root causes behind their susceptibility often remain unclear. Ascertaining such root-cause weaknesses would help facilitate secure DL systems, but this requires studying extraction attacks in a wide variety of scenarios to identify commonalities across attack success and DL characteristics. The sheer technical effort and time required to understand, implement, and evaluate even a single attack make it infeasible to explore the large number of unique extraction attack scenarios that exist, and current frameworks are typically designed to work only for specific attack types, datasets, and hardware platforms. In this paper, we present PINCH: an efficient and automated extraction attack framework capable of deploying and evaluating multiple DL models and attacks across heterogeneous hardware platforms. We demonstrate the effectiveness of PINCH by empirically evaluating a large number of previously unexplored extraction attack scenarios, as well as secondary attack staging. Our key findings show that 1) multiple characteristics affect extraction attack success, spanning DL model architecture, dataset complexity, hardware, and attack type, and 2) partially successful extraction attacks significantly enhance the success of further adversarial attack staging.
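To ground the terminology, here is a hedged sketch of the basic learning-based extraction loop that frameworks of this kind automate: query the victim model, collect its outputs as a transfer set, and fit a surrogate on it. The function names are placeholders, not PINCH's interface.

```python
def extract_surrogate(victim_predict, query_inputs, surrogate, fit):
    """Generic learning-based model extraction loop (illustrative sketch).

    victim_predict: black-box access to the target model (input -> output).
    query_inputs:   inputs the attacker is able or allowed to submit.
    surrogate:      attacker-chosen model to train as a copy of the victim.
    fit:            training routine fit(surrogate, inputs, labels) -> surrogate.
    """
    transfer_labels = [victim_predict(x) for x in query_inputs]   # query phase
    return fit(surrogate, query_inputs, transfer_labels)          # train surrogate

# The stolen surrogate can then seed secondary attack stages, e.g. crafting
# adversarial examples against the surrogate and transferring them to the victim.
```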
Deep learning research has generated widespread interest and led to a wide variety of technological innovations and applications. Since a large proportion of deep learning research focuses on vision-based applications, there is potential to use some of these techniques to enable low-power, portable healthcare diagnostic support solutions. In this paper, we present an embedded hardware-and-software implementation of a microscopy diagnostic support system for point-of-care (POC) case studies of: (a) malaria in thick blood smears, (b) tuberculosis in sputum samples, and (c) intestinal parasite infection in stool samples. We use a SqueezeNet-based model to reduce the network size and computation time. We also utilize trained quantization techniques to further reduce the memory footprint of the learned models. This enables microscopy-based pathogen classification with laboratory-expert-level accuracy on a standalone embedded hardware platform. Compared to a conventional CPU-based implementation, the proposed implementation is 6x more power-efficient and has an inference time of ~3 ms per sample.
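As an illustration of the kind of quantization step mentioned above (a generic post-hoc int8 weight quantization, not the exact trained-quantization scheme used in the paper), the following sketch maps floating-point weights to 8-bit integers with a per-tensor scale, cutting the per-weight footprint from 4 bytes to 1.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric per-tensor int8 quantization of a weight array (sketch)."""
    max_abs = float(np.max(np.abs(weights)))
    scale = max_abs / 127.0 if max_abs > 0 else 1.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights for accuracy checks."""
    return q.astype(np.float32) * scale

# Quick check of the quantization error on random weights.
w = np.random.randn(64, 32).astype(np.float32)
q, s = quantize_int8(w)
print("max abs error:", np.max(np.abs(dequantize(q, s) - w)))
```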
Machine learning (ML) offers powerful methods for detecting and modeling associations, often in data with large feature spaces and complex associations. Many useful tools/packages (e.g., scikit-learn) have been developed to make the various elements of data handling, processing, modeling, and interpretation accessible. However, it is not trivial for most researchers to assemble these elements into rigorous, replicable, unbiased, and effective data analysis pipelines. Automated machine learning (AutoML) seeks to address these issues by simplifying the ML analysis process for all. Here, we introduce STREAMLINE, a simple, transparent, end-to-end AutoML pipeline designed as a framework to easily conduct rigorous ML modeling and analysis (limited initially to binary classification). STREAMLINE is specifically designed to compare performance across datasets, ML algorithms, and other AutoML tools. It is unique in offering a fully transparent and consistent baseline of comparison through a carefully designed series of pipeline elements, including: (1) exploratory analysis, (2) basic data cleaning, (3) cross-validation partitioning, (4) data scaling and imputation, (5) filter-based feature importance estimation, (6) collective feature selection, (7) ML modeling with 'Optuna' hyperparameter optimization across 15 established algorithms (including less well-known genetic programming and rule-based ML), (8) evaluation across 16 classification metrics, (9) model feature importance estimation, (10) statistical significance comparisons, and (11) automatic export of all results, plots, a PDF summary report, and models that can easily be applied to replication data.
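A condensed sketch of a few of the pipeline stages listed above (cross-validation partitioning, imputation and scaling, and Optuna-driven hyperparameter optimization), written directly against scikit-learn and Optuna; this is an independent illustration of the stages under assumed settings, not STREAMLINE's code, and uses a stand-in dataset and a single algorithm.

```python
import optuna
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.impute import SimpleImputer
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)                        # stand-in binary task
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=42)   # CV partitioning

def objective(trial):
    # Hyperparameters searched by Optuna for one of the modeling algorithms.
    model = RandomForestClassifier(
        n_estimators=trial.suggest_int("n_estimators", 50, 500),
        max_depth=trial.suggest_int("max_depth", 2, 20),
        random_state=42,
    )
    pipe = make_pipeline(SimpleImputer(), StandardScaler(), model)  # imputation + scaling
    return cross_val_score(pipe, X, y, cv=cv, scoring="balanced_accuracy").mean()

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=25)
print(study.best_params, study.best_value)
```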
Low-power edge-AI capabilities are essential for on-device extended reality (XR) applications that support the Metaverse. In this work, we investigate two representative XR workloads, (i) hand detection and (ii) eye segmentation, for hardware design space exploration. For both applications, we train deep neural networks and analyze the impact of quantization and hardware-specific bottlenecks. Through simulations, we evaluate a CPU and two systolic inference accelerator implementations. Next, we compare these hardware solutions at advanced technology nodes. We evaluate the impact of integrating state-of-the-art emerging non-volatile memory technologies (STT/SOT/VGSOT MRAM) into the XR-AI inference pipeline. We find that significant energy benefits can be achieved for hand detection (IPS=40) and eye segmentation (IPS=6) by introducing non-volatile memory in designs at the 7nm node (IPS = inferences per second). Moreover, we can achieve substantial area reductions (>=30%) thanks to the smaller form factor of MRAM compared to traditional SRAM.